29 research outputs found

    Making Cooperative Extension Work for Southern Nevada: Fulfilling UNLV's Urban Land Grant Mission

    Full text link
    The Lincy Institute and Brookings Mountain West at UNLV are pleased to host a colloquium entitled, “Making Cooperative Extension Work for Southern Nevada: Fulfilling UNLV’s Urban Land Grant Mission.” The event will explore ways to rethink and reform County Cooperative Extension so that it is relevant to the modern metropolis that is the Las Vegas area. The colloquium will feature research presentations that examine County Cooperative Extension from social, economic, and operational perspectives.

    ELVIS: Entertainment-led video summaries

    Get PDF
    © ACM, 2010. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Multimedia Computing, Communications, and Applications, 6(3): Article no. 17 (2010), http://doi.acm.org/10.1145/1823746.1823751

    Video summaries present the user with a condensed and succinct representation of the content of a video stream. Usually this is achieved by attaching degrees of importance to low-level image, audio, and text features. However, video content elicits strong and measurable physiological responses in the user, which are potentially rich indicators of what video content is memorable to or emotionally engaging for an individual user. This article proposes a technique that exploits such physiological responses to a given video stream by a given user to produce Entertainment-Led VIdeo Summaries (ELVIS). ELVIS is made up of five analysis phases corresponding to five physiological response measures: electro-dermal response (EDR), heart rate (HR), blood volume pulse (BVP), respiration rate (RR), and respiration amplitude (RA). Through these analyses, the temporal locations of the most entertaining video subsegments, as they occur within the video stream as a whole, are automatically identified. The effectiveness of the ELVIS technique is verified through a statistical analysis of data collected during a set of user trials. Our results show that ELVIS is more consistent than RANDOM, EDR, HR, BVP, RR, and RA selections in identifying the most entertaining video subsegments for content in the comedy, horror/comedy, and horror genres. Subjective user reports also reveal that ELVIS video summaries are comparatively easy to understand, enjoyable, and informative.
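    The article evaluates ELVIS against single-measure baselines but the abstract gives no implementation detail. As a rough illustration of the general idea only, the following Python sketch standardizes several physiological measures, fuses them into one score, and ranks fixed-length subsegments by it; the equal-weight averaging, the function names, and the fixed segment length are all assumptions, not the authors' method.

        import numpy as np

        def zscore(x):
            # Standardize a 1-D signal so different physiological measures are comparable.
            return (x - x.mean()) / (x.std() + 1e-9)

        def rank_subsegments(signals, segment_len, top_k=3):
            # signals: dict mapping a measure name ('EDR', 'HR', 'BVP', 'RR', 'RA')
            #          to a 1-D NumPy array sampled at the same rate, aligned to the video.
            # segment_len: subsegment length in samples.
            # Returns the top_k (start, end) sample ranges with the highest fused score.
            fused = np.mean([zscore(s) for s in signals.values()], axis=0)
            n_segments = len(fused) // segment_len
            scores = [fused[i * segment_len:(i + 1) * segment_len].mean()
                      for i in range(n_segments)]
            best = np.argsort(scores)[::-1][:top_k]
            return [(int(i) * segment_len, (int(i) + 1) * segment_len) for i in best]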

    Rethinking Cooperative Extension in Southern Nevada

    Full text link
    Cooperative Extension is a partnership funded by federal, state, and county governments that extends University of Nevada services to Nevadans. As the original branch of Nevada’s land-grant institution, the University of Nevada, Reno (UNR) has administered the Cooperative Extension Service (CES) since the program’s inception over a century ago. However, as currently organized, CES has a limited presence in Southern Nevada and has not developed programming commensurate with Clark County’s tax contribution to the CES budget. We propose that CES in Southern Nevada be managed by the University of Nevada, Las Vegas (UNLV). As we show, UNLV is already the most connected and active non-profit organization in the region. The campus currently delivers a host of services and programs that are consistent with CES’s mission, despite receiving no direct funding to support these activities.

    Affective Man-Machine Interface: Unveiling human emotions through biosignals

    Get PDF
    As has been known for centuries, humans exhibit an electrical profile. This profile is altered by various psychological and physiological processes, which can be measured through biosignals, e.g., electromyography (EMG) and electrodermal activity (EDA). These biosignals can reveal our emotions and, as such, can serve as an advanced man-machine interface (MMI) for empathic consumer products. However, such an MMI requires the correct classification of biosignals into emotion classes. This chapter starts with an introduction to biosignals for emotion detection. Next, a state-of-the-art review of automatic emotion classification is presented, along with guidelines for affective MMI. Subsequently, a study is presented that explores the use of EDA and three facial EMG signals to determine neutral, positive, negative, and mixed emotions, using recordings of 21 people. A range of techniques is tested, resulting in a generic framework for automated emotion classification with up to 61.31% correct classification of the four emotion classes, without the need for personal profiles. Among various other directives for future research, the results emphasize the need for parallel processing of multiple biosignals.
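    The chapter's pipeline reaches up to 61.31% correct classification over four emotion classes; the exact feature set and classifier are not reproduced here. The sketch below is a minimal stand-in for the windowed-features-plus-classifier approach, assuming synthetic recordings and a generic random-forest classifier rather than the techniques the chapter actually tested.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def window_features(signal, win):
            # Per-window summary statistics (mean, std, range) of one biosignal.
            n = len(signal) // win
            wins = signal[:n * win].reshape(n, win)
            return np.column_stack([wins.mean(axis=1), wins.std(axis=1),
                                    np.ptp(wins, axis=1)])

        # Synthetic stand-ins for one EDA and three facial EMG channels, plus
        # per-window labels (0=neutral, 1=positive, 2=negative, 3=mixed).
        rng = np.random.default_rng(0)
        channels = [rng.normal(size=6000) for _ in range(4)]
        win = 200
        labels = np.arange(6000 // win) % 4

        X = np.hstack([window_features(ch, win) for ch in channels])
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        print("mean CV accuracy:", cross_val_score(clf, X, labels, cv=3).mean())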

    Emotional ratings and skin conductance response to visual, auditory and haptic stimuli

    Get PDF
    Human emotional reactions to stimuli delivered by different sensory modalities are a topic of interest for many disciplines, from Human-Computer Interaction to the cognitive sciences. Different databases of emotion-eliciting stimuli are available, tested on large numbers of participants. Interestingly, the stimuli within any one database are always of the same type. In other words, to date, no data have been obtained and compared for distinct types of emotion-eliciting stimuli from the same participant. This makes it difficult to use different databases within the same experiment, limiting the complexity of experiments investigating emotional reactions. Moreover, whereas the stimuli and the participants’ ratings of the stimuli are available, participants’ physiological reactions to the emotional stimuli are often recorded but not shared. Here, we test stimuli delivered through a visual, auditory, or haptic modality in a within-participant experimental design. We provide the results of our study in the form of a MATLAB structure including basic demographics on the participants, each participant’s self-assessment of his/her emotional state, and his/her physiological reactions (i.e., skin conductance).
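    Since the data are distributed as a MATLAB structure, readers working outside MATLAB may want to load it with SciPy. The Python sketch below shows one plausible way to do so; the file name and every field name are illustrative guesses, as the abstract does not specify the structure's actual layout.

        from scipy.io import loadmat

        # squeeze_me/struct_as_record make MATLAB structs easier to navigate in Python.
        data = loadmat("emotion_stimuli_dataset.mat",   # hypothetical file name
                       squeeze_me=True, struct_as_record=False)

        participants = data["participants"]             # hypothetical top-level field
        first = participants[0]
        print(first.age, first.gender)                  # assumed demographic fields
        print(first.ratings)                            # assumed self-assessment ratings
        print(first.skin_conductance)                   # assumed skin-conductance trace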

    Developing Multimodal Intelligent Affective Interfaces For Tele-Home Health Care

    No full text
    Accounting for a patient's emotional state is integral to medical care. Tele-health research attests to the challenge clinicians must overcome in assessing patient emotional state when modalities are limited (J. Adv. Nurs. 36(5): 668). The extra effort involved in addressing this challenge requires attention, skill, and time. Large caseloads may not afford tele-home health-care (tele-HHC) clinicians the time and focus necessary to accurately assess emotional states and trends. Unstructured interviews with experienced tele-HHC providers support the introduction of objective indicators of patients' emotional status, in a useful form, to enhance patient care. We discuss our contribution to addressing this challenge, which involves building user models not only of the physical characteristics of users - in our case, patients - but also of their emotions. We explain our research in progress on Affective Computing for tele-HHC applications, which includes developing a system architecture for monitoring and responding to human multimodal affect and emotions via multimedia and empathetic avatars, mapping physiological signals to emotions, and synthesizing the patient's affective information for the health-care provider. Our results, using a wireless, non-invasive wearable computer to collect physiological signals and map them to emotional states, show the feasibility of our approach; we conclude by discussing the future research issues we have identified. © 2003 Elsevier Science Ltd. All rights reserved.
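    The abstract describes mapping wearable physiological signals to emotional states for clinician review but gives no implementation detail. As a toy sketch of that general idea, the following Python fragment scores arousal from deviations above fixed per-patient baselines; the actual system presumably learns richer per-patient models, and all names and thresholds here are assumptions.

        from dataclasses import dataclass

        @dataclass
        class VitalsSample:
            heart_rate: float        # beats per minute
            skin_conductance: float  # microsiemens

        def estimate_arousal(sample, hr_baseline=70.0, sc_baseline=2.0):
            # Crude arousal score in [0, 1] from deviations above personal baselines.
            hr_dev = max(0.0, (sample.heart_rate - hr_baseline) / hr_baseline)
            sc_dev = max(0.0, (sample.skin_conductance - sc_baseline) / sc_baseline)
            return min(1.0, 0.5 * hr_dev + 0.5 * sc_dev)

        def flag_for_clinician(samples, threshold=0.6):
            # Indices of readings whose estimated arousal exceeds the review threshold.
            return [i for i, s in enumerate(samples) if estimate_arousal(s) > threshold]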

    Toward a generic architecture for UI adaptation to emotions

    Get PDF
    Adapting user interfaces at runtime is a well-known requirement in human-computer interaction, and it becomes a very challenging task when dynamic user properties such as emotions are taken into account. To address the question of adapting user interfaces to emotions, we propose Perso2U, an architecture for personalizing user interfaces according to user emotions at runtime. This approach relies on emotion-recognition tools, which raises the question of their accuracy. This paper aims to show that similar emotion results can be obtained from several tools based on facial recognition, in order to emphasize the independence of the emotion-inference engine and, more globally, of the architecture. To this end, the paper reports on the results of an experiment comparing three emotion-detection tools.
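    The experiment compares the emotions reported by three face-based detection tools; the abstract does not state which agreement metric was used. The sketch below shows one plausible comparison, simple pairwise label agreement over aligned frames, with made-up tool names and labels; the paper's actual metric and data may differ.

        from itertools import combinations

        def pairwise_agreement(tool_labels):
            # tool_labels: dict mapping tool name to a list of per-frame emotion
            # labels; all lists are aligned and of equal length. Returns, for each
            # pair of tools, the fraction of frames on which they agree.
            results = {}
            for (a, la), (b, lb) in combinations(tool_labels.items(), 2):
                results[(a, b)] = sum(x == y for x, y in zip(la, lb)) / len(la)
            return results

        # Illustrative per-frame labels from three hypothetical detectors.
        labels = {
            "tool_A": ["happy", "happy", "neutral", "sad"],
            "tool_B": ["happy", "neutral", "neutral", "sad"],
            "tool_C": ["happy", "happy", "neutral", "neutral"],
        }
        print(pairwise_agreement(labels))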